Image smoothing method based on gradient surface area and sparsity constraints
LI Hui, WU Chuansheng, LIU Jun, LIU Wen
Journal of Computer Applications    2021, 41 (7): 2039-2047.   DOI: 10.11772/j.issn.1001-9081.2020081325
Concerning the problems that low-contrast edges are easily lost and texture details are incompletely suppressed during texture image smoothing, an image smoothing method based on gradient surface area and sparsity constraints was proposed. Firstly, the image was regarded as a two-dimensional surface embedded in three-dimensional space. On this basis, the geometric characteristics of the image were analyzed and a regularization term constraining the gradient surface area was proposed, which improves texture suppression. Secondly, based on the statistical characteristics of the image, a hybrid regularization-constrained image smoothing model combining L0 gradient sparsity and adaptive gradient surface area constraints was established. Finally, the alternating direction method of multipliers was used to solve the non-convex, non-smooth optimization model efficiently. Experimental results on texture suppression, edge detection, texture enhancement and image fusion show that the proposed algorithm overcomes defects of the L0 gradient minimization smoothing method, such as the staircase effect and insufficient filtering, and is able to preserve and sharpen the significant edge contours of the image while removing a large amount of texture information.
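The surface-area view of an image can be made concrete with a short numpy sketch. This is illustrative only, not the paper's exact regularizer: treating the image as a height field over a unit grid, each pixel contributes an area element sqrt(1 + |∇u|²), so flat regions contribute exactly one unit while edges and texture strictly increase the total.

```python
import numpy as np

def gradient_surface_area(u):
    """Area of the graph of image u viewed as a surface in 3D.

    With unit grid spacing, each pixel contributes sqrt(1 + u_x^2 + u_y^2),
    so a constant image has area equal to its pixel count and any texture
    or edge strictly increases the total.
    """
    uy, ux = np.gradient(u.astype(float))
    return float(np.sum(np.sqrt(1.0 + ux ** 2 + uy ** 2)))

flat = np.zeros((8, 8))                              # no gradients anywhere
ramp = np.tile(np.arange(8, dtype=float), (8, 1))    # constant slope of 1
```

Minimizing such a term therefore favors piecewise-flat results, which is why it suppresses fine texture.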
Bamboo strip surface defect detection method based on improved CenterNet
GAO Qinquan, HUANG Bingcheng, LIU Wenzhe, TONG Tong
Journal of Computer Applications    2021, 41 (7): 1933-1938.   DOI: 10.11772/j.issn.1001-9081.2020081167
In bamboo strip surface defect detection, defects vary in shape and the imaging environment is cluttered, so existing Convolutional Neural Network (CNN) based detection models cannot exploit their full advantage on such data; moreover, bamboo strips come from complicated sources and collection is otherwise constrained, making it impossible to gather every type of defect, so the small amount of defect data cannot be fully learned by a CNN. To address these problems, a detection network specialized for bamboo strip defects was proposed, with CenterNet as its basic framework. To improve detection performance with little defect data, an auxiliary detection module trained from scratch was designed: at the start of training, the CenterNet part that uses the pre-trained model was frozen while the auxiliary module was trained from scratch on the defect characteristics of the bamboo strips; once the loss of the auxiliary module stabilized, the module was integrated with the pre-trained main part through an attention-based connection. The proposed network was trained and tested on the same datasets as CenterNet and YOLO v3, which is currently common in industrial detection. Experimental results show that on the bamboo strip defect detection dataset, the mean Average Precision (mAP) of the proposed method is 16.45 and 9.96 percentage points higher than those of YOLO v3 and CenterNet respectively. The method effectively detects bamboo strip defects of different shapes without adding much time consumption, and performs well in actual industrial applications.
Ship detection based on enhanced YOLOv3 under complex environments
NIE Xin, LIU Wen, WU Wei
Journal of Computer Applications    2020, 40 (9): 2561-2570.   DOI: 10.11772/j.issn.1001-9081.2020010097
In order to improve the intelligence level of waterway traffic safety supervision and further improve the positioning precision and detection accuracy of deep-learning-based ship detection algorithms, an enhanced YOLOv3 algorithm for ship detection was proposed on the basis of the traditional YOLOv3. First, uncertainty regression of the prediction box was introduced in the network prediction layer to predict the uncertainty information of bounding boxes. Second, the loss function was redesigned using a negative log-likelihood function and an improved binary cross-entropy function. Then, the K-means clustering algorithm was used to redesign the scales of the prior anchor boxes according to ship shapes, and the prior anchor boxes were evenly distributed across the corresponding prediction scales. During the training phase, a data augmentation strategy was used to expand the number of training samples. Finally, a Non-Maximum Suppression (NMS) algorithm with a Gaussian soft-threshold function was used to post-process the prediction boxes. Comparison experiments of the individual improvements and of different object detection algorithms were conducted on a real maritime video surveillance dataset. Experimental results show that, compared with the traditional YOLOv3 algorithm, the YOLOv3 algorithm with prediction box uncertainty information reduces the number of False Positives (FP) by 35.42% and increases the number of True Positives (TP) by 1.83%, thus improving accuracy. The mean Average Precision (mAP) of the enhanced YOLOv3 algorithm on ship images reaches 87.74%, an improvement of 24.12% and 23.53% over the traditional YOLOv3 and Faster R-CNN algorithms respectively. The proposed algorithm detects 30.70 images per second, meeting the requirement of real-time detection.
Experimental results indicate that the proposed algorithm achieves high-precision, robust and real-time detection of ships under adverse conditions such as foggy weather and low light, as well as complex navigation backgrounds.
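The Gaussian soft-threshold NMS step can be sketched in a few lines. This is an illustrative implementation of soft-NMS, not the paper's code; the values of sigma and the pruning threshold are assumptions.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def gaussian_soft_nms(boxes, scores, sigma=0.5, min_score=0.001):
    """Soft-NMS: instead of discarding boxes that overlap the current best,
    decay their scores by a Gaussian of the overlap, then re-pick."""
    boxes, scores = [list(b) for b in boxes], list(scores)
    kept = []
    while boxes:
        i = int(np.argmax(scores))
        best = boxes.pop(i)
        kept.append((best, scores.pop(i)))
        scores = [s * float(np.exp(-iou(best, b) ** 2 / sigma))
                  for b, s in zip(boxes, scores)]
        survivors = [(b, s) for b, s in zip(boxes, scores) if s >= min_score]
        boxes = [b for b, _ in survivors]
        scores = [s for _, s in survivors]
    return kept
```

Unlike hard NMS, a heavily overlapping box is kept with a reduced score rather than deleted, which helps when two ships genuinely overlap in the image.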
Focused crawler method combining ontology and improved Tabu search for meteorological disaster
LIU Jingfa, GU Yaoping, LIU Wenjie
Journal of Computer Applications    2020, 40 (8): 2255-2261.   DOI: 10.11772/j.issn.1001-9081.2019122238
Considering that the traditional focused crawler easily falls into local optima and describes topics insufficiently, a focused crawler method combining Ontology and Improved Tabu Search (On-ITS) was proposed. First, the topic semantic vector was calculated by ontology semantic similarity, and the Web page text feature vector was constructed by position-weighting the text features of the Hyper Text Markup Language (HTML) page. Then, the vector space model was used to calculate the topic relevance of Web pages. On this basis, to compute the comprehensive priority of a link, the topic relevance of the link's anchor text and the PageRank (PR) value of the page containing the link were calculated. In addition, to prevent the crawler from falling into local optima, an ITS-based focused crawler was designed to optimize the crawl queue. Experimental results on the topics of rainstorm disaster and typhoon disaster show that, under the same environment, the accuracy of the On-ITS method is higher than those of the contrast algorithms by at least 8% and at most 58%, and the method also performs very well on the other evaluation indicators. The On-ITS focused crawler method can effectively improve the accuracy of obtaining domain information and crawl more topic-related Web pages.
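The two scoring ingredients named above can be sketched as follows. The cosine relevance is the standard vector space model; the linear blend in `link_priority` (and its weight `alpha`) is a hypothetical stand-in for the paper's comprehensive link priority, not its actual formula.

```python
import math

def cosine_relevance(topic_vec, page_vec):
    """Vector-space-model relevance: cosine similarity of two sparse
    term-weight vectors, represented as dicts mapping term -> weight."""
    dot = sum(w * page_vec.get(t, 0.0) for t, w in topic_vec.items())
    na = math.sqrt(sum(w * w for w in topic_vec.values()))
    nb = math.sqrt(sum(w * w for w in page_vec.values()))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0

def link_priority(anchor_relevance, pagerank, alpha=0.7):
    """Hypothetical blend of anchor-text relevance and PageRank used to
    rank candidate links in the crawl queue."""
    return alpha * anchor_relevance + (1 - alpha) * pagerank
```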
Mixed non-convex and non-smooth regularization constraint based blind image restoration
GENG Yuanqian, WU Chuansheng, LIU Wen
Journal of Computer Applications    2020, 40 (4): 1171-1176.   DOI: 10.11772/j.issn.1001-9081.2019091647
In order to restore high-quality clear images, a blind image restoration method based on regularization constraints was proposed. Firstly, to improve the accuracy of blur kernel estimation, an L0-norm regularization term was used to impose a sparsity constraint on the blur kernel, in accordance with the kernel's sparsity. Secondly, to retain image edge information, the L0-norm of the combined first- and second-order image gradients was used to impose a regularization constraint on the image gradient, in accordance with the gradient's sparsity. Finally, since the proposed mixed regularization model is essentially a non-convex, non-smooth optimization problem, it was solved by the alternating direction method of multipliers, and the clear image was restored in the non-blind deconvolution stage using an L1-norm data-fitting term and the total variation method. Experimental results show that the proposed method restores clearer details and edge information and yields higher-quality restoration results.
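L0 terms like those above are typically handled through an auxiliary-variable subproblem with a closed-form hard-thresholding solution, as in L0 gradient minimization. The sketch below shows that standard subproblem, not the paper's full solver; `lam` and `beta` are the usual sparsity weight and penalty weight.

```python
import numpy as np

def l0_gradient_threshold(g, lam, beta):
    """Closed-form solution of  min_h  beta*(h - g)^2 + lam*[h != 0]:
    keep the gradient value g where g^2 >= lam/beta, zero it elsewhere."""
    h = g.copy()
    h[g ** 2 < lam / beta] = 0.0
    return h

grads = np.array([0.1, -0.4, 2.0, 0.05])
```

Small gradients (noise, weak texture) are zeroed while strong edges pass through unchanged, which is what makes the L0 prior edge-preserving.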
Video compression artifact removal algorithm based on adaptive separable convolution network
NIE Kehui, LIU Wenzhe, TONG Tong, DU Min, GAO Qinquan
Journal of Computer Applications    2019, 39 (5): 1473-1479.   DOI: 10.11772/j.issn.1001-9081.2018081801
The optical flow estimation methods commonly used in video quality enhancement and super-resolution reconstruction can only estimate linear motion between pixels. To solve this problem, a new multi-frame compression artifact removal network architecture was proposed, consisting of a motion compensation module and a compression artifact removal module. By replacing traditional optical flow estimation with adaptive separable convolution, the motion compensation module was able to handle the curvilinear motion between pixels that optical flow methods cannot resolve well. For each video frame, the motion compensation module generated a corresponding convolutional kernel based on the image structure and the local displacement of pixels; motion offsets were then estimated and pixels in the next frame compensated by means of local convolution. The compensated frame and the original next frame were combined as input to the compression artifact removal module, which removed the compression artifacts of the original frame by fusing the pixel information of the two frames. Compared with the state-of-the-art Multi-Frame Quality Enhancement (MFQE) algorithm on the same training and testing datasets, the proposed network improves Peak Signal-to-Noise Ratio (PSNR) by up to 0.44 dB and by 0.32 dB on average. The experimental results demonstrate that the proposed network performs well in removing video compression artifacts.
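The separable-convolution mechanics can be illustrated with fixed kernels. In the paper the network predicts a distinct vertical/horizontal kernel pair per pixel; this sketch applies one fixed pair to the whole image, which is enough to show why separability is cheap (two 1D passes instead of one 2D pass).

```python
import numpy as np

def separable_conv2d(img, kv, kh):
    """2D filtering as two 1D passes: a vertical kernel kv down each column,
    then a horizontal kernel kh across each row. A separable n x n filter
    costs 2n multiplies per pixel instead of n^2."""
    tmp = np.apply_along_axis(lambda col: np.convolve(col, kv, mode="same"), 0, img)
    return np.apply_along_axis(lambda row: np.convolve(row, kh, mode="same"), 1, tmp)

impulse = np.zeros((5, 5))
impulse[2, 2] = 1.0                      # unit impulse probes the filter
kv = np.array([0.25, 0.5, 0.25])
kh = np.array([0.2, 0.6, 0.2])
response = separable_conv2d(impulse, kv, kh)
```

The impulse response equals the outer product of the two 1D kernels, confirming the two passes realize the corresponding 2D filter.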
Energy consumption-reducing algorithm based on WiMAX networks
LIU Wenzhi, LIU Zhaobin
Journal of Computer Applications    2017, 37 (7): 1866-1872.   DOI: 10.11772/j.issn.1001-9081.2017.07.1866
To overcome the shortcoming of WiMAX energy-saving algorithms that idle mobile stations unnecessarily waste energy because of differences in channel quality, a formalization of Average Energy Consumption (AEC) for Mobile Station (MS) Quality of Service (QoS) was given according to power-saving class Ⅰ of the IEEE 802.16e standard, and an Idle-state Avoidance and Virtual Burst (IAVB) algorithm based on channel quality balancing was proposed. The algorithm combines an idle-state threshold with selection of the main mobile terminal by channel quality balancing, and refines the end condition of the virtual burst to avoid energy being wasted among idle-state nodes when a virtual burst does not end properly. Simulation results show that the energy-saving performance of the IAVB algorithm is 15% higher than that of the Longest Virtual Burst First (LVBF) algorithm, indicating that the proposed algorithm can control idle-state energy consumption and improve the resource utilization efficiency of WiMAX networks.
Review on HDD-SSD hybrid storage
CHEN Zhen, LIU Wenjie, ZHANG Xiao, BO Hailong
Journal of Computer Applications    2017, 37 (5): 1217-1222.   DOI: 10.11772/j.issn.1001-9081.2017.05.1217
The explosion of data in the big data environment brings great challenges to the architecture and capacity of storage systems, whose development now tends toward large capacity, low cost and high performance. Meanwhile, storage devices such as the conventional rotating magnetic Hard Disk Drive (HDD), the Solid State Drive (SSD) and Non-Volatile Random Access Memory (NVRAM) are limited by their intrinsic characteristics, so no single kind of storage device can meet all these requirements. Hybrid storage, which combines different storage media, is a good solution to this problem. The SSD, a storage device with high reliability, low energy consumption and high performance, is more and more widely applied in hybrid storage systems. By combining magnetic disks with solid-state drives, the high performance of the SSD and the low-cost, high-capacity features of the HDD are both exploited: hybrid storage can provide users with large storage space and high performance while reducing cost. The current research status of HDD-SSD hybrid storage systems was described, and different systems were summarized and classified. For two different HDD-SSD hybrid storage architectures, the key technologies and their deficiencies were discussed. Finally, trends and research focuses of future hybrid storage were discussed.
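One of the two architectures surveyed, SSD as a cache in front of the HDD, can be sketched as a toy model. This is an illustration of the general cache-tier idea, not any specific system from the review; the LRU policy and promote-on-read behavior are assumptions.

```python
from collections import OrderedDict

class HybridStore:
    """Toy SSD-as-cache hybrid: every block lives on the HDD; recently read
    blocks are promoted to a fixed-size SSD tier managed with LRU eviction."""

    def __init__(self, ssd_capacity):
        self.ssd = OrderedDict()   # block -> data, kept in LRU order
        self.hdd = {}              # capacity tier (backing store)
        self.ssd_capacity = ssd_capacity

    def write(self, block, data):
        self.hdd[block] = data     # writes land on the capacity tier

    def read(self, block):
        if block in self.ssd:                  # fast path: SSD hit
            self.ssd.move_to_end(block)
            return self.ssd[block], "ssd"
        data = self.hdd[block]                 # slow path: fetch and promote
        self.ssd[block] = data
        if len(self.ssd) > self.ssd_capacity:
            self.ssd.popitem(last=False)       # evict the coldest block
        return data, "hdd"
```

Real systems differ mainly in *what* they promote (hot-data identification) and *when* (synchronous vs. background migration), which is where the surveyed designs diverge.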
Traffic sign recognition based on optimized convolutional neural network architecture
WANG Xiaobin, HUANG Jinjie, LIU Wenju
Journal of Computer Applications    2017, 37 (2): 530-534.   DOI: 10.11772/j.issn.1001-9081.2017.02.0530
In existing traffic sign recognition algorithms, a short training time often comes with a low recognition rate, while a high recognition rate requires a long training time. To resolve these problems, the Convolutional Neural Network (CNN) architecture was optimized by using the Batch Normalization (BN) method, the Greedy Layer-Wise Pretraining (GLP) method, and a Support Vector Machine (SVM) in place of the classifier, and a new traffic sign recognition algorithm based on the optimized CNN architecture was proposed. The BN method changes the data distribution of the middle layers by normalizing the output of each convolutional layer to zero mean and unit variance, accelerating training convergence and reducing training time. With the GLP method, the first convolutional layer is trained and its parameters preserved when training ends, then the second layer is trained likewise, until all convolutional layers are trained; this effectively improves the recognition rate of the network. The SVM classifier focuses only on misclassified samples and no longer processes correct ones, further speeding up training. Experiments on the German Traffic Sign Recognition Benchmark show that, compared with the traditional CNN, the new algorithm reduces training time by 20.67% and reaches a recognition rate of 98.24%, demonstrating that optimizing the structure of the traditional CNN greatly shortens training time while achieving a high recognition rate.
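The BN normalization described above is easy to verify numerically. A minimal forward-pass sketch (training-mode statistics only; the learnable gamma/beta are left at their defaults):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """BN forward pass for a batch of activations (rows = samples):
    normalize each feature to zero mean and unit variance over the batch,
    then apply the learnable scale gamma and shift beta."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

acts = np.array([[1.0, 20.0], [3.0, 60.0], [5.0, 100.0]])
normed = batch_norm(acts)
```

Whatever the input scale per feature, each output column ends up with mean 0 and variance ~1, which is the property that stabilizes and speeds up training.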
Super pixel segmentation algorithm based on Hadoop
WANG Chunbo, DONG Hongbin, YIN Guisheng, LIU Wenjie
Journal of Computer Applications    2016, 36 (11): 2985-2992.   DOI: 10.11772/j.issn.1001-9081.2016.11.2985
In view of the high time complexity of super pixel segmentation for high-resolution images, a Hadoop-based super pixel segmentation algorithm was proposed in which super pixels instead of the original pixels are used as the segmentation processing elements, combining the characteristics of Hadoop and of super pixels. Firstly, a static and dynamic adaptive algorithm for multiple tasks was proposed to reduce the coupling between block placement in HDFS (Hadoop Distributed File System) and task scheduling. Secondly, based on distance and gradient constraints on the super pixels formed at super pixel block boundaries, a parallel watershed segmentation algorithm was proposed for each Map task; meanwhile, two strategies for merging super pixel blocks in the Shuffle phase were proposed and compared. Finally, the combination of super pixels was optimized in the Reduce task to complete the final segmentation. Experimental results show that the proposed algorithm is superior to the Simple Linear Iterative Clustering (SLIC) algorithm and the Normalized cut (Ncut) algorithm in Boundary Recall (BR) and Under-segmentation Error (UE), and that the segmentation time of high-resolution images is remarkably decreased.
High-efficient community-based message transmission scheme in opportunistic network
YAO Yukun, YANG Jikai, LIU Wenhui
Journal of Computer Applications    2015, 35 (9): 2447-2452.   DOI: 10.11772/j.issn.1001-9081.2015.09.2447
To deal with the problems in the Community-based Message Transmission Scheme in Opportunistic Social Network (OSNCMTS) that message-distribution tasks back up on nodes inside a community and that active nodes are selected blindly for message transmission, a High-Efficient Community-based Message Transmission Scheme in opportunistic network (HECMTS) was proposed. In HECMTS, firstly, communities were divided by the Extremal Optimization (EO) algorithm and the corresponding community matrices were distributed to the nodes; secondly, message copies were assigned based on the community matrices and the delivery success rate of data packets to destination nodes; finally, information about active nodes was collected as those nodes moved back and forth between communities, and suitable nodes were then selected by querying this information to complete message transmission between communities. Simulation results show that, compared with OSNCMTS, HECMTS decreases routing overhead by at least 19% and average end-to-end delay by at least 16%.
Logarithmic adaptation crowding genetic algorithm for multimodal function optimization
LIU Wentao, HU Jiabao
Journal of Computer Applications    2014, 34 (6): 1645-1648.   DOI: 10.11772/j.issn.1001-9081.2014.06.1645
The crowding genetic algorithm can obtain multiple optima of multimodal functions, but it is inefficient and cannot reach high precision within limited iterations. To obtain all optima of a multimodal function quickly, a crowding genetic algorithm based on logarithmic adaptation was presented, combining niche crowding with hill-climbing operators. The algorithm computes the step distance of the hill-climbing operators by a logarithmic adaptation to the iteration count, which maintains genetic diversity in the population throughout the process. Experiments and comparative analysis on several one-dimensional and two-dimensional multimodal functions show that the algorithm ensures both solution accuracy and convergence speed within limited iterations and obtains all optimal solutions more stably, proving it effective for multimodal function optimization problems.
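The logarithmic adaptation idea can be sketched as a step-size schedule. The exact formula used in the paper is not reproduced here; the schedule below is one plausible logarithmic decay, labeled hypothetical.

```python
import math

def climb_step(d0, iteration):
    """Hypothetical logarithmic schedule for the hill-climbing step size:
    it shrinks slowly with the iteration count, so early generations
    explore widely while later ones refine each peak."""
    return d0 / math.log(iteration + math.e)
```

Because log grows so slowly, the step never collapses to zero in finite iterations, which is what keeps diversity alive late in the run.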
Culling of foreign matter fake information in detection of subminiature accessory based on prior knowledge
ZHEN Rongjie, WANG Zhong, LIU Wenjing, GOU Jiansong
Journal of Computer Applications    2014, 34 (5): 1458-1462.   DOI: 10.11772/j.issn.1001-9081.2014.05.1458
In the visual detection of subminiature accessories, the extracted target contour is affected by foreign matter in the field of view, such as dust and hair crumbs. To avoid the impact of foreign matter on measurement, a method of culling foreign matter fake information based on prior knowledge was put forward. Firstly, the corners of a component image containing foreign matter were detected. Secondly, the corner-distribution features of the standard component were obtained by statistics. Finally, a judgment condition for foreign matter fake information was derived from those corner-distribution features and used to cull it. Through successful application in an actual engineering project, processing experiments on three typical images with foreign matter prove that the proposed algorithm ensures measurement accuracy while effectively culling the foreign matter fake information from the images.
Secure quantum communication protocol based on symmetric W state and identity authentication
LIU Chao, GENG Huantong, LIU Wenjie
Journal of Computer Applications    2014, 34 (2): 438-441.  
Due to its better robustness, the entangled W state is considered well qualified as an information carrier for quantum information processing and quantum secure communication. Recently, many quantum direct communication protocols based on the four-particle W state or the three-particle asymmetric W state have been proposed; however, these protocols either have low communication efficiency or are difficult to implement in physics experiments. By utilizing the three-particle W state and a quantum identity authentication mechanism, a new deterministic secure quantum communication protocol was proposed. The protocol consists of five parts: authentication code generation, quantum state preparation, quantum state distribution, eavesdropping check and identity authentication, and information communication. The two participants only need to perform two-particle Bell measurements and single-particle Z-basis or X-basis measurements, and communication efficiency is improved: one W state can transmit one classical bit of information. Security analysis shows that the protocol is secure against various attacks by an eavesdropper Eve, including impersonation attacks.
Broadcast-based multiparty remote state preparation protocol
GENG Huantong, JIA Tingting, LIU Wenjie
Journal of Computer Applications    2013, 33 (12): 3385-3388.  
Remote State Preparation (RSP) is a branch of quantum information processing. In order to prepare the same state for multiple recipients, a broadcast-based RSP protocol was proposed for one sender and two receivers, and then extended to N receivers. GHZ states were used as the shared quantum resource. By constructing two special measurement bases, the sender performed two rounds of multi-particle projective measurement; the receivers then applied unitary operations, determined by the measurement results, to recover the prepared state. Analysis shows that the multiparty RSP protocol can be used to broadcast information in quantum networks.
Improved tone modeling by exploiting articulatory features for Mandarin speech recognition
CHAO Hao, YANG Zhanlei, LIU Wenju
Journal of Computer Applications    2013, 33 (10): 2939-2944.  
Articulatory features, which represent articulatory information, can complement prosodic features to improve tone recognition performance. A set of 19 pronunciation categories was defined according to the pronunciation characteristics of initials and finals, and 19 articulatory tandem features (the posteriors of the speech signal belonging to the 19 pronunciation categories) were obtained by hierarchical multilayer perceptron classifiers. These articulatory tandem features, together with prosodic features, were then used for tone modeling. Tone recognition experiments on three kinds of tone models show an absolute accuracy increase of about 5% when both articulatory and prosodic features are used. When the proposed tone model is integrated into a Large Vocabulary Continuous Speech Recognition (LVCSR) system, the character error rate is reduced significantly.
Plant leaf recognition method based on clonal selection algorithm and K nearest neighbor
ZHANG Ning, LIU Wenping
Journal of Computer Applications    2013, 33 (07): 2009-2013.   DOI: 10.11772/j.issn.1001-9081.2013.07.2009
To decrease classifier design and training time, a new method combining the Clonal Selection Algorithm and K Nearest Neighbor (CSA+KNN) was proposed. After image preprocessing, comprehensive feature information was extracted from geometry and texture features, and CSA+KNN was used to train on and classify the plant leaf samples. A plant leaf database of 100 species was used to test the proposed algorithm, which achieved a recognition accuracy of 91.37%. Compared with other methods, the experimental results demonstrate the efficiency, accuracy and fast training of the proposed method, and verify the significance of texture features in leaf recognition. The CSA+KNN method broadens the field of plant leaf recognition methods and can be applied to building digitized plant specimen museums.
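The KNN half of CSA+KNN is straightforward to sketch. This is a generic KNN vote, not the paper's trained memory-cell classifier; the two-feature toy data are invented for illustration.

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify a query feature vector by majority vote among its k
    Euclidean-nearest training samples."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Two toy leaf classes described by hypothetical (shape, texture) features.
train_x = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
                    [0.8, 0.9], [0.9, 0.8], [0.85, 0.85]])
train_y = np.array([0, 0, 0, 1, 1, 1])
```

In the combined method, clonal selection would first condense the training set into representative antibodies, and KNN then votes over those representatives.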
Improved syllable-based acoustic modeling for continuous Chinese speech recognition
CHAO Hao, YANG Zhanlei, LIU Wenju
Journal of Computer Applications    2013, 33 (06): 1742-1745.   DOI: 10.3724/SP.J.1087.2013.01742
Concerning the variability of the speech signal caused by the co-articulation phenomenon in Chinese speech recognition, a syllable-based acoustic modeling method was proposed. Firstly, context-independent syllable-based acoustic models were trained, initialized with intra-syllable Initial/Final (IF) based diphones to alleviate the problem of training data sparsity. Secondly, the inter-syllable co-articulation effect was captured by incorporating inter-syllable transition models into the recognition system. Experiments conducted on the "863-test" dataset show that the relative character error rate is reduced by 12.13%, proving that the syllable-based acoustic model and the inter-syllable transition model are effective in handling the co-articulation effect.
Channel allocation model and credibility evaluation for LBS indoor nodes
LIU Zhaobin, LIU Wenzhi, FANG Ligang, TANG Yazhe
Journal of Computer Applications    2013, 33 (03): 603-606.   DOI: 10.3724/SP.J.1087.2013.00603
In response to the inability of GPS to provide Location-Based Services (LBS) in indoor environments, an LBS indoor channel allocation model with credibility evaluation and control was presented, integrating GPS, Wi-Fi, ZigBee and Bluetooth technologies. It solves the problems arising in combined channel allocation, including evaluating the traffic load, the available Radio Frequency (RF) channels, and the number of non-overlapping RF channels of each node. A prediction model was built from each Access Point (AP)'s signal strength at the reference points. An optimization algorithm was designed to determine and select the credibility of channel combinations based on energy evaluation, adaptively selecting the neighbors with the highest comprehensive effect to participate in iterative optimization. Simulation results indicate that this method can effectively suppress the propagation of communication interference errors in the network, reduce positioning complexity, and improve positioning accuracy as well as the scalability and robustness of the entire network.
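The reference-point matching at the heart of such indoor positioning can be sketched as fingerprint comparison. This shows only that one step, not the paper's multi-radio credibility model; the AP names, dBm values and the squared-error metric are assumptions.

```python
def locate(fingerprints, observed, missing_rss=-100.0):
    """Nearest-fingerprint indoor positioning sketch: pick the reference
    point whose recorded per-AP signal strengths (in dBm) are closest,
    in squared error, to the currently observed ones."""
    best_point, best_err = None, float("inf")
    for point, rss in fingerprints.items():
        err = sum((rss[ap] - observed.get(ap, missing_rss)) ** 2 for ap in rss)
        if err < best_err:
            best_point, best_err = point, err
    return best_point

# Hypothetical offline survey: signal strengths recorded at two points.
fingerprints = {
    "lobby":   {"ap1": -40.0, "ap2": -70.0},
    "hallway": {"ap1": -75.0, "ap2": -45.0},
}
```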
Face recognition algorithm based on multi-level texture spectrum features and PCA
DANG Xin-peng, LIU Wen-ping
Journal of Computer Applications    2012, 32 (08): 2316-2319.   DOI: 10.3724/SP.J.1087.2012.02316
To improve the recognition rate of the Principal Component Analysis (PCA) algorithm in face recognition, a new algorithm combining image texture spectrum features with PCA was proposed. Firstly, the texture unit operator was used to extract the texture spectrum feature of the face image. Secondly, PCA was used to reduce the dimensionality of the texture spectrum feature. Finally, K-Nearest Neighbor (KNN) classification was used to recognize the face. The ORL and Yale face databases were used to test the proposed algorithm, with recognition accuracies of 96.5% and 95% respectively, higher than those of PCA and Modular Two-Dimensional PCA (M2DPCA). The experimental results demonstrate the efficiency and accuracy of the proposed algorithm.
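The PCA dimensionality-reduction step can be sketched via SVD of the mean-centered feature matrix (a generic PCA, not this paper's pipeline; the toy matrix is invented):

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project row vectors onto their top principal components, found via
    SVD of the mean-centered data (components ordered by explained variance)."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.5],
              [7.0, 8.0, 10.0],
              [2.0, 1.0, 0.5]])
Z = pca_reduce(X, 2)
```

In the face pipeline, the rows would be per-image texture spectrum histograms, and the reduced vectors `Z` feed the KNN classifier.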
Objective quality evaluation method of stereo image based on steerable pyramid
WEI Jin-jin, LI Su-mei, LIU Wen-juan, ZANG Yan-jun
Journal of Computer Applications    2012, 32 (03): 710-714.   DOI: 10.3724/SP.J.1087.2012.00710
Through analyzing and simulating human visual perception of stereo images, an objective quality evaluation method for stereo images was proposed. The method combines the characteristics of the Human Visual System (HVS) with structural similarity, using a steerable pyramid to simulate multi-channel effects, and uses a stereo matching algorithm to assess the stereo sense. Experimental results show that the proposed objective method yields stereo image quality evaluations consistent with subjective assessment and better reflects the level of image quality and stereo sense.
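The structural-similarity ingredient can be sketched in its simplest single-window form (the paper applies it per subband of the steerable pyramid and combines it with other HVS terms; the stabilizing constants here are the conventional small values, assumed):

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Structural similarity computed over one global window: compares
    luminance (means), contrast (variances) and structure (covariance)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
noisy = ref + 0.3 * rng.standard_normal((16, 16))
```

By construction the score equals 1 only for a perfect match and drops as structure diverges, which is what lets it track perceived degradation.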